Contact Us

Tel: 400-889-7783
E-mail: andy.liu@xyserver.cn
AS: fae@xyserver.cn

AI TRAINING


From recognizing speech to training virtual personal assistants and teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these problems requires training deep learning models of exponentially growing complexity in a practical amount of time.

AI INFERENCE

 

To connect us with the most relevant information, services, and products, hyperscale companies have started to tap into AI. However, keeping up with user demand is a daunting challenge. For example, the world’s largest hyperscale company recently estimated that they would need to double their data center capacity if every user spent just three minutes a day using their speech recognition service. 

 

Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, the Tesla V100 GPU delivers 30X higher inference performance than a CPU server. This giant leap in throughput and efficiency makes scaling out AI services practical.


HIGH PERFORMANCE COMPUTING (HPC)

 

HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by allowing researchers to analyze large volumes of data for rapid insights where simulation alone cannot fully predict the real world.

 

Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads. Every researcher and engineer can now afford an AI supercomputer to tackle their most challenging work.
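
The "pairing of CUDA cores and Tensor Cores" above refers to mixed-precision arithmetic: Tensor Cores multiply FP16 matrices while accumulating results at FP32 precision. The toy Python sketch below models only that rounding behavior; `to_fp16` and `mixed_precision_dot` are illustrative names, not NVIDIA's API, and this is an educational simulation, not how the hardware is programmed.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest FP16 value (IEEE half precision)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def mixed_precision_dot(a, b):
    """Dot product with FP16-rounded inputs and a full-precision accumulator,
    mimicking the FP16-multiply / FP32-accumulate scheme of a Tensor Core."""
    acc = 0.0  # accumulator kept at higher precision than the operands
    for x, y in zip(a, b):
        acc += to_fp16(x) * to_fp16(y)
    return acc

# 0.1 is not exactly representable in FP16, so the result drifts slightly
# from the exact value 100.0 -- but far less than it would if the
# accumulator were also rounded to FP16 at every step.
print(mixed_precision_dot([0.1] * 1000, [1.0] * 1000))
```

Keeping the accumulator at higher precision is what lets mixed-precision training retain accuracy while the multiplies run at the much faster FP16 rate.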

SPECIFICATIONS

 

GPU Architecture                NVIDIA Volta
NVIDIA Tensor Cores             640
NVIDIA CUDA® Cores              5120
Double-Precision Performance    7 TFLOPS
Single-Precision Performance    14 TFLOPS
Tensor Performance              112 TFLOPS
GPU Memory                      16 GB HBM2
Memory Bandwidth                900 GB/sec
ECC                             Yes
Interconnect Bandwidth          32 GB/sec
System Interface                PCIe Gen3
Form Factor                     PCIe Full Height/Length
Max Power Consumption           250 W
Thermal Solution                Passive
Compute APIs                    CUDA, DirectCompute, OpenCL, OpenACC
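
As a rough sanity check, the peak-throughput figures in the table follow from the listed core counts and the clock speed. A minimal sketch, assuming a boost clock of about 1.37 GHz (the clock is not listed above, so it is an assumption here):

```python
# Deriving the peak-throughput numbers in the spec table from core counts.
BOOST_CLOCK_GHZ = 1.37   # assumed boost clock, not stated in the table
CUDA_CORES = 5120        # from the table
TENSOR_CORES = 640       # from the table

# Each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1000
# Volta's FP64 rate is half its FP32 rate.
fp64_tflops = fp32_tflops / 2
# Each Tensor Core performs a 4x4x4 matrix multiply-accumulate per clock:
# 64 multiply-adds = 128 FLOPs.
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_GHZ / 1000

print(f"FP32:   ~{fp32_tflops:.0f} TFLOPS")   # ~14
print(f"FP64:   ~{fp64_tflops:.0f} TFLOPS")   # ~7
print(f"Tensor: ~{tensor_tflops:.0f} TFLOPS") # ~112
```

The computed values land on the table's 14 / 7 / 112 TFLOPS figures, which is why the Tensor Performance entry is 8X the single-precision number.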


Beijing Sunway XinYue Technology Co., Ltd.

ICP License: 京ICP备17007355号-1